Tuning Technique for Multiple Precision Dense Matrix Multiplication using Prediction of Computational Time

Author

  • Tomonori Kouya
Abstract

Although reliable long precision floating-point arithmetic libraries such as QD and MPFR/GMP are necessary to solve ill-conditioned problems in numerical simulation, long precision BLAS-level computation such as matrix multiplication has not been fully optimized, because its tuning cost is much higher than that of IEEE float and double precision arithmetic. In this study, we develop a technique that shortens the tuning time by predicting the computational time of the blocking algorithm at several block sizes and then selecting the fastest matrix multiplication method. The technique tunes multiple precision dense real matrix multiplication for various precisions, matrix sizes, and degrees of parallelization.
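To make the selection step concrete, the following C sketch (our illustration, not the author's code) times one block-level multiplication kernel per candidate block size, extrapolates a predicted total time for the full n x n product, and keeps the candidate with the minimum prediction. The kernel block_matmul, the candidate list, and the use of plain double in place of an MPFR- or DD-based kernel are all assumptions made for brevity.

/* Minimal sketch of block-size selection by predicted computational time.
 * Assumes POSIX clock_gettime(); error checks and run averaging omitted. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical block-level kernel: C += A * B for nb x nb blocks.
 * In the paper's setting this would use MPFR/GMP or DD/QD arithmetic. */
void block_matmul(int nb, const double *A, const double *B, double *C)
{
    for (int i = 0; i < nb; i++)
        for (int k = 0; k < nb; k++)
            for (int j = 0; j < nb; j++)
                C[i * nb + j] += A[i * nb + k] * B[k * nb + j];
}

static double elapsed_sec(struct timespec s, struct timespec e)
{
    return (double)(e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) * 1e-9;
}

/* Predict the total time of a blocked n x n multiplication with block size
 * nb: time one block product and scale by the (n/nb)^3 block products the
 * classical blocking algorithm performs. */
double predict_total_time(int n, int nb, double *A, double *B, double *C)
{
    struct timespec s, e;
    clock_gettime(CLOCK_MONOTONIC, &s);
    block_matmul(nb, A, B, C);
    clock_gettime(CLOCK_MONOTONIC, &e);
    double nblk = (double)(n / nb);
    return elapsed_sec(s, e) * nblk * nblk * nblk;
}

int main(void)
{
    const int n = 2048;                    /* target matrix size (assumed) */
    const int cand[] = {32, 64, 128, 256}; /* candidate block sizes (assumed) */
    const int ncand = (int)(sizeof cand / sizeof cand[0]);
    int best = cand[0];
    double best_t = 1e300;

    for (int i = 0; i < ncand; i++) {
        int nb = cand[i];
        double *A = calloc((size_t)nb * nb, sizeof *A);
        double *B = calloc((size_t)nb * nb, sizeof *B);
        double *C = calloc((size_t)nb * nb, sizeof *C);
        double t = predict_total_time(n, nb, A, B, C);
        printf("nb = %4d: predicted %.3f s\n", nb, t);
        if (t < best_t) { best_t = t; best = nb; }
        free(A); free(B); free(C);
    }
    printf("selected block size: %d\n", best);
    return 0;
}

The point of predicting rather than measuring full runs is that only a handful of small block products are executed instead of complete trial multiplications, which is what keeps the tuning time short.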

Similar articles

Performance evaluation of multiple precision matrix multiplications using parallelized Strassen and Winograd algorithms

It is well known that Strassen and Winograd algorithms can reduce the computational costs associated with dense matrix multiplication. We have already shown that they are also very effective for software-based multiple precision floating-point arithmetic environments such as the MPFR/GMP library. In this paper, we show that we can obtain the same effectiveness for double-double (DD) and quadrup...
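As a reminder of where the savings come from, here is a minimal one-level Strassen step in C (an illustrative sketch with our own naming, not code from the paper): given the four h x h quadrants of A and B, it forms the seven products M1..M7 and combines them into the quadrants of C, replacing the eight block multiplications of the classical algorithm.

#include <stdlib.h>

/* Elementwise helpers on h x h row-major blocks (Z may alias X). */
static void add(int h, const double *X, const double *Y, double *Z)
{ for (int i = 0; i < h * h; i++) Z[i] = X[i] + Y[i]; }

static void sub(int h, const double *X, const double *Y, double *Z)
{ for (int i = 0; i < h * h; i++) Z[i] = X[i] - Y[i]; }

/* Classical h x h product used at the recursion base. */
static void mul(int h, const double *X, const double *Y, double *Z)
{
    for (int i = 0; i < h; i++)
        for (int j = 0; j < h; j++) {
            double s = 0.0;
            for (int k = 0; k < h; k++) s += X[i * h + k] * Y[k * h + j];
            Z[i * h + j] = s;
        }
}

/* One Strassen step on pre-partitioned quadrants: 7 multiplications
 * instead of 8. Allocation error checks omitted for brevity. */
void strassen_step(int h,
                   const double *A11, const double *A12,
                   const double *A21, const double *A22,
                   const double *B11, const double *B12,
                   const double *B21, const double *B22,
                   double *C11, double *C12, double *C21, double *C22)
{
    size_t sz = (size_t)h * h * sizeof(double);
    double *T1 = malloc(sz), *T2 = malloc(sz), *M[7];
    for (int i = 0; i < 7; i++) M[i] = malloc(sz);

    add(h, A11, A22, T1); add(h, B11, B22, T2); mul(h, T1, T2, M[0]);  /* M1 */
    add(h, A21, A22, T1); mul(h, T1, B11, M[1]);                       /* M2 */
    sub(h, B12, B22, T2); mul(h, A11, T2, M[2]);                       /* M3 */
    sub(h, B21, B11, T2); mul(h, A22, T2, M[3]);                       /* M4 */
    add(h, A11, A12, T1); mul(h, T1, B22, M[4]);                       /* M5 */
    sub(h, A21, A11, T1); add(h, B11, B12, T2); mul(h, T1, T2, M[5]);  /* M6 */
    sub(h, A12, A22, T1); add(h, B21, B22, T2); mul(h, T1, T2, M[6]);  /* M7 */

    /* C11 = M1 + M4 - M5 + M7,  C12 = M3 + M5,
       C21 = M2 + M4,            C22 = M1 - M2 + M3 + M6 */
    add(h, M[0], M[3], C11); sub(h, C11, M[4], C11); add(h, C11, M[6], C11);
    add(h, M[2], M[4], C12);
    add(h, M[1], M[3], C21);
    sub(h, M[0], M[1], C22); add(h, C22, M[2], C22); add(h, C22, M[5], C22);

    free(T1); free(T2);
    for (int i = 0; i < 7; i++) free(M[i]);
}

Applied recursively, the multiplication count per level drops from 8 to 7, giving the familiar O(n^2.807) bound; in multiple precision arithmetic, where each scalar multiplication is expensive, the saving is correspondingly larger.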

Auto-tuning Dense Vector and Matrix-Vector Operations for Fermi GPUs

In this paper, we consider the automatic performance tuning of dense vector and matrix-vector operations on GPUs. Such operations form the backbone of level 1 and level 2 routines in the Basic Linear Algebra Subroutines (BLAS) library and are therefore of great importance in many scientific applications. As examples, we develop single-precision CUDA kernels for the Euclidean norm (SNRM2) and th...

High-Performance Matrix-Vector Multiplication on the GPU

In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing. We show that it is essentially a matter of fully utilizing the fine-grained parallelism of the many-...

A model for the "Fuzzy TOPSIS" based on Zadeh's extension principle

The TOPSIS process is one of the most comprehensive systems designed for multiple-criteria decision making, since this technique enables formulation of the problem as a decision matrix, as well as consideration of different quantitative and qualitative criteria. Fuzzy TOPSIS methods have been introduced to support fundamental decision making ...

Auto-tuning of level 1 and level 2 BLAS for GPUs

The use of high performance libraries for dense linear algebra operations is of great importance in many numerical scientific applications. The most common operations form the backbone of the Basic Linear Algebra Subroutines (BLAS) library. In this paper, we consider the performance and auto-tuning of level 1 and level 2 BLAS routines on GPUs. As examples, we develop single-precision CUDA kerne...


Journal:
  • CoRR

Volume: abs/1710.01839
Pages: -
Publication year: 2017